Data Collection Guidelines for Consistent Evaluation of Data from Verification and Monitoring Safeguard Systems

Authors

  • Roberto Lenarduzzi
  • Kim Castleberry
  • Michael Whitaker
Abstract

One of the several activities that International Atomic Energy Agency (IAEA) inspectors perform in the verification of Safeguard operations is the review and correlation of data from different sources. This process is often complex because of the different forms in which the data are presented. This paper describes some of the elements that are necessary to create a 'standardized' structure for the verification of data. When properly collected and formatted, data can be analyzed with off-the-shelf software applications using customized macros to automate the commands for the desired analysis. The standardized data-collection methodology is based on instrumentation guidelines as well as data-structure elements such as verifiable timing of data entry, automated data logging, and identification codes. The identification codes are used to associate data items with their sources and to correlate them with items from other data-logging activities. The addition of predefined parameter ranges allows automated evaluation with the capability to provide a data summary: a cross-index of all data related to a specific event. Actual databases are used as examples. The data collection guidelines described in this paper facilitate the use of data from a variety of instrumentation platforms and also allow the instrumentation itself to be more easily applied in subsequent monitoring applications.

INTRODUCTION

Considerable effort has been, and continues to be, applied to the development of instrumentation that measures and logs the parameters and events needed to verify international safeguard operations. For safeguard monitoring, these measurements are usually designed not only to provide the actual measurement data, but also to use the same data to verify the validity of another measurement or of a process declaration entered in a logbook. Most safeguards systems are developed through a coordinated effort by several teams, each specializing in a particular monitoring discipline. A complication arises from the fact that the teams are almost always from different organizations and have their own established procedures for data analysis and presentation. To further complicate the process, the recipient of the results may also be a team with its own set of format expectations. In the monitoring and verification of safeguard operations, the team of inspectors must analyze data gathered by all of the teams. To accomplish their task, the inspectors must sort and correlate results from the various measurements even though the results are often presented in different formats (tables, graphs, photos, etc.). Another frequently encountered difficulty is a lack of consistency in basic parameters such as time of collection, event names, and data source identification. There are already visible trends in the data acquisition and automation industries toward standardizing the structure of the data being collected and the commands required to automate processes. The following section describes an example of such a data-correlation effort. Subsequently, a few guidelines are presented on how to plan data collection so as to facilitate automated analysis, correlation, and presentation of results.
EXAMPLES OF DATA REVIEW

Early in the verification of the Highly Enriched Uranium (HEU) down-blending experiment at the Portsmouth Gaseous Diffusion Plant, it was realized that the effort required of inspectors to correlate data from the various measurements and logbooks would be complex. One of the tasks the IAEA inspectors performed was the verification of the amount of HEU removed from feed cylinders. To accomplish this, they needed to correlate information from five different data sets:

• The operator declarations, manually entered into a 'mailbox' computer database; because of computer-accessibility issues, these entries were not made in real time.
• Weight measurements from the Load-Cell-based Weight Monitoring System (LCWS), which continuously logged the weight of the material in the cylinders at the feed station.
• Still photos from the video surveillance cameras, which stored images triggered by motion in the feed area, by sudden large changes in weight detected by the LCWS, and at specified intervals.
• The spot checks on the HEU feed-cylinder weights performed by the inspectors.
• Non-Destructive Assay (NDA) measurements of the HEU feed cylinders, also performed randomly by the inspectors.

The change of a feed cylinder provides a good example of how the correlation was accomplished. To verify this event, the inspectors:

• printed the operator log entries from the mailbox computer, which provided a description and data related to the entire activity;
• compared the log entries with data from the LCWS, along with the list of detected events caused by weight perturbations (old cylinder removed from the platform, new cylinder placed on the platform, operator stepped on the platform, etc.); and
• reviewed the images from the video surveillance.

In these operations each of the data records carried a time stamp, but the correlation was hampered because the various system clocks were not well synchronized. In addition, the times listed in the operator log usually referred to the beginning of an operation and were frequently off by several minutes with respect to the times in the computer-logged data. Another correlation example involves the spot-check weight measurements performed by the inspectors, which had to be compared to both the printed operator log entries and the weight data provided by the LCWS. It is evident from these brief examples that the inspectors needed to understand the structure of each measurement database in order to correlate the proper data and thus verify the validity of the claimed operations. Since verification requirements can vary greatly from site to site, starting the process can involve a significant learning curve for the inspectors.
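Correlation problems of this kind lend themselves to simple automation once the data sets are machine-readable. The following minimal sketch, a hypothetical illustration and not the system actually deployed at Portsmouth, pairs operator log entries with LCWS weight events whose time stamps agree within a configurable tolerance, absorbing the clock offsets and "start of operation" time ambiguity noted above. The record fields, sample values, and ten-minute tolerance are all assumptions.

    from datetime import datetime, timedelta

    # Hypothetical records; in practice these would be imported from the
    # mailbox database and the LCWS log files.
    operator_log = [
        {"time": datetime(1999, 3, 4, 8, 0), "entry": "Feed cylinder change started"},
        {"time": datetime(1999, 3, 4, 8, 30), "entry": "New feed cylinder connected"},
    ]
    lcws_events = [
        {"time": datetime(1999, 3, 4, 8, 7), "event": "large weight drop (cylinder removed)"},
        {"time": datetime(1999, 3, 4, 8, 36), "event": "large weight rise (cylinder placed)"},
    ]

    def correlate(log, events, tolerance=timedelta(minutes=10)):
        # Pair each log entry with every event whose time stamp falls within
        # the tolerance window; exact matches are unrealistic when clocks are
        # unsynchronized and log times mark only the start of an operation.
        return [(entry,
                 [e for e in events if abs(e["time"] - entry["time"]) <= tolerance])
                for entry in log]

    for entry, matches in correlate(operator_log, lcws_events):
        print(entry["time"], "-", entry["entry"])
        for e in matches:
            print("    corroborated by LCWS:", e["time"], "-", e["event"])

A cross-index like this gives the inspector, for each declared operation, the instrument events that corroborate it, which is the kind of automated summary the abstract describes.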
SUGGESTED DATA COLLECTION AND PRESENTATION GUIDELINES

In designing measurement systems for safeguard operations, significant computing power is often applied to collecting large amounts of data, but very little thought is given to using the same power to transform those data into useful information. The following paragraphs suggest a few guidelines that can facilitate the use of computers to automate the correlation of data sets. Although implementing some of the guidelines may be impractical or not cost-effective, their intent can help in planning a more efficient data structure.

1. Off-the-Shelf Software Readable Data

All electronic data should be stored in a format that is readable by commercial off-the-shelf software such as spreadsheet applications (e.g., Microsoft Excel). Tab-delimited ASCII files can be imported into spreadsheet applications as well as text editors and word processors. This file type affords the widest possible compatibility with existing office-automation software. With the addition of custom macros, most off-the-shelf applications can perform repetitive and even complex operations with ease.

2. Database Record Structure

Planning the structure of data records is usually necessary if automated analysis is desired. Even though monitoring or experimental data needs vary significantly from task to task, there are usually some common entries, such as date, time, location, and experiment ID, which can be positioned consistently in the record sequence. In addition, the decimal resolution of numeric data should be specified, as well as the format for basic data and for the time stamp associated with events and measurement readings.

3. 'Standardized' Variables

Data variables representing the same parameter should be consistently identified and represented. While it is typical for a parameter such as weight to be identified with the same name in different experiments, the use of metric units in one case and English units in another should be avoided. Industry has encountered compatibility problems in large data acquisition and system-automation projects where products from different manufacturers must be integrated into a single monitoring or control system. This is especially true in cases where it would not be practical or cost-effective for a single manufacturer to produce the entire system. An example of variable standardization that can be referenced as a model is the set of standard variables defined by Echelon Corporation, creator of LonWorks. These standard variables were defined in conjunction with a series of companies that have adopted the use of Echelon products. LonMark is the association of LonWorks-based product manufacturers and users, and its goal is to establish guidelines for product interoperability at the application layer. Part of these guidelines is a list of Standard Network Variable Types (SNVTs) associated with measurements, status information, or commands to be exchanged among products. Each variable type is identified by a uniquely assigned number. Table 1 shows the entries for a few of the presently defined variables. The complete list is managed and published by the LonMark association, and there is a published petitioning process for the official approval of new data types. The related interoperability guidelines, along with the variable type definitions, have been in effect for several years and have probably had a significant positive influence in helping the automation industry remain competitive.

Table 1. Example of entries in LonMark's Master SNVT List (measurement: mass)

    Name              Range (resolution)                    SNVT #
    SNVT_mass         0..6,553.5 grams (0.1 g)              23
    SNVT_mass_f       0..1E38 g                             56
    SNVT_mass_kilo    0..6,553.5 kg (0.1 kg)                24
    SNVT_mass_mega    0..6,553.5 metric tons (0.1 ton)      25
    SNVT_mass_mil     0..6,553.5 milligrams (0.1 mg)        26
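As a concrete illustration of guidelines 1 through 3, the sketch below emits measurement records as tab-delimited ASCII with a fixed field order (ISO time stamp, source identification code, variable name, value, units) and an SNVT-style table of 'standardized' variables that pins each variable to a single unit and resolution. The field layout, identifier scheme, and file name are assumptions chosen for illustration, not a format prescribed by the paper or by LonMark.

    import csv
    from datetime import datetime, timezone

    # Fixed record layout (guideline 2): every logger emits the same field order.
    FIELDS = ["timestamp_utc", "source_id", "variable", "value", "units"]

    # 'Standardized' variables (guideline 3): one name, one unit, one resolution,
    # loosely modeled on the SNVT entries in Table 1.
    VARIABLES = {"mass_kilo": {"units": "kg", "resolution": 0.1}}

    def make_record(source_id, variable, value):
        spec = VARIABLES[variable]  # unknown variable names raise KeyError
        quantized = round(value / spec["resolution"]) * spec["resolution"]
        return {
            "timestamp_utc": datetime.now(timezone.utc).isoformat(timespec="seconds"),
            "source_id": source_id,  # identification code tying the item to its source
            "variable": variable,
            "value": f"{quantized:.1f}",
            "units": spec["units"],
        }

    # Tab-delimited ASCII (guideline 1): importable by spreadsheets and editors.
    with open("lcws_log.txt", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS, delimiter="\t")
        writer.writeheader()
        writer.writerow(make_record("LCWS-01", "mass_kilo", 1482.73))

Because every subsystem would share the same layout, a single set of spreadsheet macros, or a correlation routine like the one sketched earlier, could operate on any of the files without per-system customization.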
